The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
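Patch-based training, the most common workaround for oversized samples reported above (69%), can be sketched minimally as follows. This is a generic illustration, not any surveyed submission; the function name and sizes are ours.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect fixed-size patches.

    Samples too large to fit in GPU memory are instead fed to the
    network one patch at a time; predictions are later stitched back
    together to form the full-resolution output.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 "image" split into 128x128 training patches with 50% overlap.
image = np.random.rand(512, 512)
patches = extract_patches(image, patch_size=128, stride=64)
```

Overlapping strides trade extra computation for smoother stitched predictions at patch borders.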
Recently, deep learning methods have achieved state-of-the-art performance on many medical image segmentation tasks. Many of them are based on convolutional neural networks (CNNs). For such methods, the encoder is the key component that extracts global and local information from the input image; the extracted features are then passed to the decoder to predict the segmentation. In contrast, several recent works have shown the superior performance of using Transformers, which can better model long-range spatial dependencies and capture low-level details. However, a Transformer alone as the encoder underperforms on certain tasks where it cannot effectively replace a convolution-based encoder. In this paper, we propose a model with a dual encoder for 3D biomedical image segmentation. Our model is a U-shaped CNN augmented with an independent Transformer encoder. We fuse the information from the convolutional encoder and the Transformer and pass it to the decoder to obtain the result. We evaluate our method on three public datasets from three different challenges: BTCV, MoDA, and Decathlon. Compared with state-of-the-art models with and without Transformers on each task, our proposed method obtains higher Dice scores across the board.
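The abstract does not specify how the two encoders' features are fused; one common choice is channel concatenation followed by a learned projection. The sketch below illustrates that generic pattern in NumPy with a random matrix standing in for the trained projection layer; all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps from the two encoders for one 3D volume,
# shape (channels, depth, height, width).
cnn_feat = rng.standard_normal((32, 8, 8, 8))    # convolutional encoder
trans_feat = rng.standard_normal((32, 8, 8, 8))  # Transformer encoder

# Fuse by channel concatenation, then project back to the decoder's
# channel width with a 1x1x1 convolution (here: a plain matrix).
fused = np.concatenate([cnn_feat, trans_feat], axis=0)   # (64, 8, 8, 8)
w = rng.standard_normal((32, 64)) / np.sqrt(64)          # projection weights
decoder_input = np.einsum('oc,cdhw->odhw', w, fused)     # (32, 8, 8, 8)
```

Concatenation keeps both feature streams intact and lets the projection learn how to weight them, rather than forcing an early choice such as element-wise addition.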
Electroencephalography (EEG) and language have each been widely explored for many downstream tasks (e.g., sentiment analysis, relation detection, etc.). Multimodal approaches that study both domains, however, remain under-explored, even though multimodal learning has been shown in recent years to be more powerful than its unimodal counterparts. In this study, we aim to explore the relationship and dependency between EEG and language, i.e., how one domain reflects and represents the other. To study the relationship at the representation level, we introduce MTAM, a Multimodal Transformer Alignment Model, to observe coordinated representations between the two modalities, and thus employ the transformed representations for downstream applications. We use various relation-alignment-seeking techniques, such as canonical correlation analysis and Wasserstein distance, as loss functions to transform low-level language and EEG features into high-level transformed features. On the downstream applications, sentiment analysis and relation detection, we achieve new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieves an F1-score improvement of 16.5% on sentiment analysis for K-EmoCon, 26.6% on sentiment analysis for ZuCo, and 31.1% on relation detection for ZuCo. In addition, we provide interpretations of the performance improvement by: (1) visualizing the original and transformed feature distributions, showing the effectiveness of the alignment module in discovering and encoding the relationship between EEG and language; (2) visualizing word-level and sentence-level EEG-language alignment weights, showing the influence of different language semantics and EEG frequency features; and (3) visualizing brain topographies to provide an intuitive demonstration of the connectivity of EEG and language responses across brain regions.
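The Wasserstein distance named above is used by MTAM as an alignment loss between feature distributions. For equal-size one-dimensional samples it has a closed form: the mean absolute difference of sorted values. The sketch below shows that special case only; the paper's actual loss operates on learned high-dimensional features and is not reproduced here.

```python
import numpy as np

def wasserstein_1d(u, v):
    """1-D Wasserstein (earth mover's) distance between two equal-size
    empirical samples: mean absolute difference of sorted values."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# Identical samples have zero transport cost; shifting a sample by a
# constant c costs exactly c.
a = np.array([0.0, 1.0, 2.0])
zero_cost = wasserstein_1d(a, a)        # 0.0
shift_cost = wasserstein_1d(a, a + 3.0)  # 3.0
```

Minimizing such a distance between EEG-derived and language-derived features pulls the two distributions toward each other, which is the intuition behind using it as an alignment loss.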
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Human group detection, which splits a crowd of people into groups, is an important step for video-based human social activity analysis. The core of human group detection is the representation and division of human social relations. In this paper, we propose a new two-stage multi-head framework for human group detection. In the first stage, we propose a human behavior simulator head to learn the social relation feature embedding, which is trained in a self-supervised manner by leveraging the socially grounded multi-person behavior relationship. In the second stage, based on the social relation embedding, we develop a self-attention-inspired network for human group detection. Remarkable performance on two state-of-the-art large-scale benchmarks, i.e., PANDA and JRDB-Group, verifies the effectiveness of the proposed framework. Benefiting from the self-supervised social relation embedding, our method can provide promising results with very few (labeled) training data. We will release the source code to the public.
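The self-attention mechanism that the second-stage network draws on can be sketched in its plain scaled dot-product form. This is the textbook operation only, with Q = K = V and no learned projections; the paper's actual architecture is not reproduced.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a set of person features.

    x: (n, d) array, one d-dimensional embedding per detected person.
    Returns attended features and the row-stochastic attention weights,
    which can be read as pairwise relation strengths between people.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                      # pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax per row
    return weights @ x, weights

rng = np.random.default_rng(0)
people = rng.standard_normal((5, 16))   # 5 people, 16-dim embeddings
out, w = self_attention(people)
```

Reading the weight matrix row-wise gives, for each person, a soft distribution over the others, which is why attention is a natural fit for modeling social relations.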
To obtain a more comprehensive activity understanding of a crowded scene, in this paper, we propose a new problem of panoramic human activity recognition (PAR), which aims to simultaneously achieve individual action, social group activity, and global activity recognition. This is a challenging yet practical problem in real-world applications. For this problem, we develop a novel hierarchical graph neural network to progressively represent and model the multi-granularity human activities and mutual social relations for a crowd of people. We further build a benchmark to evaluate the proposed method and other existing related methods. Experimental results verify the rationality of the proposed PAR problem, the effectiveness of our method, and the usefulness of the benchmark. We will release the source code and benchmark to the public to promote the study of this problem.
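The hierarchical, multi-granularity structure (individual → group → scene) can be illustrated with simple feature pooling. This is only a schematic of the hierarchy, with group membership given by hand, whereas the paper's network learns to infer it; all names and sizes are ours.

```python
import numpy as np

# Toy node features for 6 people, 8-dim each.
x = np.arange(48, dtype=float).reshape(6, 8)
groups = [[0, 1, 2], [3, 4], [5]]   # assumed group assignments

# Level 1 -> 2: pool individual features into social-group features.
group_feat = np.stack([x[idx].mean(axis=0) for idx in groups])

# Level 2 -> 3: pool group features into a single global scene feature.
global_feat = group_feat.mean(axis=0)
```

Each level's features feed that level's recognition head (individual action, group activity, global activity), which is what makes the three tasks solvable jointly.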
Multiple sclerosis (MS) is a chronic neuroinflammatory disease, and multi-modal MRIs are routinely used to monitor MS lesions. Many automatic MS lesion segmentation models have been developed and have reached human-level performance. However, most established methods assume that the MRI modalities used during training are also available during testing, which is not guaranteed in clinical practice. Previously, a training strategy termed modality dropout (ModDrop) has been applied to MS lesion segmentation to achieve state-of-the-art performance with missing modalities. In this paper, we propose a novel method dubbed ModDrop++ to train a unified network adaptive to an arbitrary number of input MRI sequences. ModDrop++ upgrades the main idea of ModDrop in two key ways. First, we devise a plug-and-play dynamic head and adopt a filter-scaling strategy to improve the expressiveness of the network. Second, we design a co-training strategy to leverage the intra-subject relation between full-modality and missing-modality data. Specifically, the intra-subject co-training strategy aims to guide the dynamic head to generate similar feature representations for the full-modality and missing-modality data of the same subject. We use two public MS datasets to show the superiority of ModDrop++. The source code and trained models are available at https://github.com/han-liu/moddropplusplus.
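The modality-dropout idea that ModDrop++ builds on can be sketched as follows. This is a generic NumPy illustration of the training-time trick only, not the paper's implementation; the function name and shapes are ours, and the dynamic head and co-training strategy are not shown.

```python
import numpy as np

def modality_dropout(volumes, keep_prob=0.7, rng=None):
    """Randomly zero out whole MRI modalities during training so the
    network learns to cope with arbitrary modality subsets at test time.

    volumes: (m, ...) array with one entry per modality
    (e.g., T1, T2, FLAIR). At least one modality is always kept.
    """
    rng = rng or np.random.default_rng()
    m = volumes.shape[0]
    mask = rng.random(m) < keep_prob
    if not mask.any():
        mask[rng.integers(m)] = True   # never drop every modality
    shaped = mask.reshape((m,) + (1,) * (volumes.ndim - 1))
    return volumes * shaped, mask

rng = np.random.default_rng(0)
scans = np.ones((3, 4, 4))   # 3 modalities, toy 4x4 slices
dropped, mask = modality_dropout(scans, keep_prob=0.5, rng=rng)
```

Because dropout happens per training step, the same network sees many modality subsets over the course of training, which is what makes a single unified model possible.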
Natural image matting is a fundamental and challenging computer vision task. Conventionally, the problem is formulated as an underdetermined one: since it is ill-posed, further assumptions on the data distribution are required to make it well-posed. For classical matting methods, a commonly adopted assumption is the local smoothness of foreground and background colors. However, deep-learning-based matting methods have not systematically taken such assumptions into account. In this work, we consider two local smoothness assumptions that can help improve deep image matting models. Based on these assumptions, we propose three techniques, namely training set refinement, color augmentation, and backpropagating refinement, which can significantly improve the performance of deep image matting models. We conduct experiments to examine the effectiveness of the proposed algorithms. Experimental results show that the proposed methods achieve favorable performance compared with existing matting methods.
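The compositing model underlying matting makes the ill-posedness above concrete: each observed pixel I is a convex blend of an unknown foreground color F and background color B, I = αF + (1 − α)B, so one observation must constrain three unknowns per pixel. A minimal numeric sketch (toy scalar intensities, values ours):

```python
import numpy as np

# The matting compositing equation: I = alpha * F + (1 - alpha) * B.
# Matting inverts this, recovering alpha (and F, B) from I alone,
# which is why priors such as local color smoothness are needed.
alpha = np.array([[0.0, 0.5, 1.0]])   # toy 1x3 alpha matte
F = np.full((1, 3), 0.9)              # foreground intensity
B = np.full((1, 3), 0.1)              # background intensity
I = alpha * F + (1.0 - alpha) * B     # observed image
```

At α = 0 the pixel shows pure background, at α = 1 pure foreground, and in between the observation alone cannot disentangle the three unknowns.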
Automatic delineation of organs at risk (OARs) and gross tumor volumes (GTVs) is of great significance for radiotherapy planning. However, it is a challenging task to learn powerful representations for accurate delineation under limited pixel- (voxel-) wise annotations. Contrastive learning at the pixel level can alleviate the dependency on annotations by learning dense representations from unlabeled data. Recent studies in this direction design various contrastive losses on feature maps to yield discriminative features for each pixel in the map. However, pixels in the same map inevitably share semantics, which may in practice hurt the discrimination of pixels within the same map and lead to unfair comparisons with pixels in other maps. To address these issues, we propose a separated region-level contrastive learning scheme, namely SepaReg, the core of which is to separate each image into regions and encode each region separately. Specifically, SepaReg comprises two components: a structure-aware image separation (SIS) module and an intra- and inter-organ distillation (IID) module. SIS is proposed to operate on the image set to rebuild a region set under the guidance of structural information; inter-organ representations are then learned from this set via a typical cross-region contrastive loss. On the other hand, IID is proposed to tackle the quantity imbalance in the region set, as tiny organs may produce fewer regions, by exploiting intra-organ representations. We conducted extensive experiments to evaluate the proposed model on a public dataset and two private datasets. Experimental results demonstrate the effectiveness of the proposed model, which consistently achieves better performance than state-of-the-art approaches. Code is available at https://github.com/jcwang123/separate_cl.
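A "typical contrastive loss" across region embeddings usually means an InfoNCE-style objective: pull an anchor region toward a positive (e.g., the same organ in another image) and push it from negatives (other organs). The sketch below is that generic loss for a single anchor, not the paper's exact loss or sampling scheme; all names are ours.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss for one region embedding.

    anchor, positive: L2-normalized (d,) vectors; negatives: (k, d).
    Lower loss means the anchor is closer to the positive than to
    the negatives, relative to temperature tau.
    """
    sims = np.concatenate([[anchor @ positive],
                           negatives @ anchor]) / tau
    sims -= sims.max()                         # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

def normalize(v):
    return v / np.linalg.norm(v)

a = normalize(np.array([1.0, 0.0]))            # anchor region
pos = normalize(np.array([0.9, 0.1]))          # same-organ region
negs = np.stack([normalize(np.array([0.0, 1.0]))])  # other-organ region
loss_good = info_nce(a, pos, negs)   # aligned pair: small loss
loss_bad = info_nce(a, negs[0], np.stack([pos]))  # swapped: large loss
```

Encoding each separated region independently, as SepaReg does, is what makes such region-to-region comparisons well-defined in the first place.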
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: by exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
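The "token relation" distillation found most effective above can be sketched as matching the student's token-token similarity structure to the teacher's. This is a schematic of the idea only, not TinyMIM's exact loss; function names, the cross-entropy formulation, and shapes are our assumptions. Note the teacher and student may have different feature widths, yet their relation matrices are directly comparable.

```python
import numpy as np

def token_relations(tokens):
    """Row-softmax of the scaled token-token similarity matrix."""
    sims = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def relation_distill_loss(teacher_tokens, student_tokens):
    """Cross-entropy between teacher and student token relations,
    averaged over rows: the student mimics how tokens relate to each
    other, not the token features themselves."""
    t = token_relations(teacher_tokens)
    s = token_relations(student_tokens)
    return float(-(t * np.log(s + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
teacher = rng.standard_normal((4, 32))   # 4 tokens, wide teacher features
student = rng.standard_normal((4, 16))   # same tokens, narrower student
loss = relation_distill_loss(teacher, student)
self_loss = relation_distill_loss(teacher, teacher)  # entropy floor
```

Because only the 4x4 relation matrices are compared, no projection between the 32-dim teacher space and the 16-dim student space is needed, which is part of what makes relation distillation convenient for mismatched architectures.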